2 - Deep Learning [ID:9002]

Okay, so welcome everybody. Last week you had the pleasure of listening to Tobias Lofven because I was still in the US, but starting from this week I'm back, so I can actually give the lecture. I heard that last week you talked a lot about the introduction to deep learning and all the really awesome stuff you can do with it, but seemingly the theory came up a little short, because when you got to the perceptron, lecture time was already over. So let's see if we can continue today. And one more thing I just remembered: I need power, because otherwise my computer will go to standby. Okay, power anywhere here? Okay, good. So, bing, now we can start.

So last week you heard the introduction, and now we want to finish off the introduction slides. There are still some things to discuss, and one of them is the perceptron.

This is one of the very early approaches to artificial intelligence, and it is inspired by biology, in particular by neural excitation. Everything I say here about the biology is really overly simplified; it's only meant as motivation. The artificial neural networks that we are going to use are only, let's say, to some degree inspired by the brain and neurons, but what we are actually doing is quite different. Still, this is one of the reasons why this kind of technology is super popular, because people say, oh look, it's like a real brain. But what we are doing is not like a real brain.

Okay?

But there are some parallels. For example, you have neurons, and those neurons receive incoming information, and if it is above a certain threshold, they will fire, that is, they will emit a new signal. And it is essentially all or nothing: how high the stimulus is doesn't matter. Even with a very high stimulus, the neuron will still just fire once, and if you are very close to the threshold but don't go above it, it won't fire. So it's like a binary classifier.

This is also the reason why so-called sigmoid functions and threshold-like functions became really popular as activation functions.
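As a small illustration (not from the lecture itself), the all-or-nothing threshold behavior and its smooth sigmoid counterpart could be sketched like this; the threshold value and the ±1 outputs follow the fire/not-fire convention described above:

```python
import math

def threshold(x, theta=0.0):
    # all-or-nothing: fire (+1) only if the stimulus exceeds the threshold
    return 1.0 if x > theta else -1.0

def sigmoid(x):
    # smooth, differentiable approximation of the threshold behavior
    return 1.0 / (1.0 + math.exp(-x))

print(threshold(0.3))            # above threshold: fires (+1)
print(threshold(-0.3))           # below threshold: does not fire (-1)
print(round(sigmoid(0.0), 2))    # 0.5, halfway between the extremes
```

Note that no matter how large the stimulus gets, `threshold` still only returns +1, exactly the "it will still just fire once" behavior from the biological motivation.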

So let's see how we can map this to a kind of mathematical framework, and the idea is fairly simple.

This is the Rosenblatt perceptron, and it is simply a binary classifier. It maps to the values minus one and one, meaning fire or not fire, and essentially the only function it computes can be expressed like this: the output of your perceptron is nothing other than sign(wᵀx), the sign of the inner product of the weights with the input.

So x is a vector, the input, which we can write as x₁, x₂ up to xₙ. In addition there is a 1 appended as input, which essentially provides the bias, and you multiply all of this with a vector of weights. These weights are the things that we want to learn later on, so the weights are essentially the information that is altered in order to mimic the learning process.

Then everything that is done here is simply element-wise multiplication and summation, and we can use a short notation: we simply write this as an inner product. So if you have a vector w and a vector x — and by the way, we augment x here with a 1 and don't note it down, because otherwise we would have to write it as the inner product of w and x plus some bias w₀. Here we use a simplified notation instead. You can see this is already advanced material, so sometimes we use very sloppy notation, but when we write wᵀx here, we actually mean that we extended the vector x with a 1.
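To make the augmented-vector convention concrete, here is a minimal sketch of the perceptron's forward pass (my own illustration, not code from the lecture; the weight values are hypothetical):

```python
import numpy as np

def perceptron(w, x):
    # augment the input with a 1 so the last weight acts as the bias w0
    x_aug = np.append(x, 1.0)
    # output is the sign of the inner product: +1 = fire, -1 = don't fire
    return 1.0 if np.dot(w, x_aug) > 0 else -1.0

# hypothetical weights: w = (w1, w2, w0), an OR-like decision rule
w = np.array([1.0, 1.0, -0.5])
print(perceptron(w, np.array([1.0, 0.0])))  # 1.0 (fires)
print(perceptron(w, np.array([0.0, 0.0])))  # -1.0 (does not fire)
```

The point of the augmentation is purely notational convenience: instead of carrying w₀ around separately as wᵀx + w₀, the bias is absorbed into one inner product.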

Part of a video series

Accessible via: Open Access

Duration: 01:28:39 min

Recording date: 2018-04-18

Uploaded: 2018-04-18 20:19:03

Language: en-US

Tags: rule feed feature backpropagation layer decision activation problem output analytic forward learning functions gradient function classification